Liquid Benchmarking: A Platform for Democratizing the Performance Evaluation Process

Authors

  • Sherif Sakr
  • Amin Shafaat
  • Fuad Bajaber
  • Ahmed Barnawi
  • Omar Batarfi
  • Abdulrahman H. Altalhi
Abstract

Performance evaluation, reproducibility, and benchmarking are crucial for assessing the practical impact of research results in the computer science field. Despite all the benefits (e.g., increased impact, increased visibility, improved research quality) that can be gained from performing extensive experimental evaluations or providing reproducible software artifacts and detailed descriptions of experimental setups, the effort required to achieve these goals remains prohibitive. In practice, conducting an independent, consistent, and comprehensive performance evaluation and benchmarking is a very time- and resource-consuming process. As a result, the quality of published experimental results is usually limited and constrained by factors such as limited manpower, limited time, or a shortage of computing resources. We demonstrate Liquid Benchmarking, an online, cloud-based platform for democratizing the performance evaluation and benchmarking processes. In particular, the platform facilitates sharing experimental artifacts (software implementations, datasets, computing resources, benchmarking tasks) as services, so that end users can easily create, mash up, and run experiments and visualize the experimental results with zero installation or configuration effort. In addition, the collaborative features of the platform enable users to share and comment on the results of the conducted experiments, supporting a transparent scientific crediting process. Furthermore, we demonstrate four benchmarking case studies that have been implemented using the Liquid Benchmarking platform in the following domains: XML compression techniques, graph indexing and querying techniques, and string similarity join algorithms.

© 2015, Copyright is with the authors. Published in Proc. 18th International Conference on Extending Database Technology (EDBT), March 23-27, 2015, Brussels, Belgium: ISBN 978-3-89318-067-7, on OpenProceedings.org. Distribution of this paper is permitted under the terms of the Creative Commons license CC BY-NC-ND 4.0.
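The abstract describes a workflow in which implementations, datasets, and benchmarking tasks are shared as composable services that users mash up into runnable experiments. As a rough illustration of that idea only (not the platform's actual API; all names below are hypothetical), a minimal sketch in Python might look like:

    # Hypothetical sketch of the "artifacts as services" workflow described
    # above: shared implementations, datasets, and tasks are mashed up into
    # an experiment that runs without local installation. All names here are
    # illustrative; they are not Liquid Benchmarking's real API.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class Experiment:
        implementation: str   # a registered technique, e.g. an XML compressor
        dataset: str          # a shared benchmark dataset
        task: str             # the benchmarking task to measure
        results: Dict[str, float] = field(default_factory=dict)

        def run(self) -> Dict[str, float]:
            # A real platform would dispatch this to shared cloud resources;
            # here we only record a placeholder measurement.
            self.results = {"runtime_sec": 0.0, "compression_ratio": 0.0}
            return self.results

    # Mash up shared artifacts into an experiment and run it.
    exp = Experiment(implementation="XMill", dataset="dblp.xml", task="xml-compression")
    print(exp.run())

The sharing and commenting features the abstract mentions would then operate on the stored results of such experiment objects, making them visible to other users.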


Related articles

A Cloud-Based Platform for Democratizing and Socializing the Benchmarking Process

Performance evaluation, benchmarking, and reproducibility represent significant aspects for evaluating the practical impact of scientific research outcomes in the computer science field. In spite of all the benefits (e.g., increasing visibility, boosting impact, improving research quality) that can be obtained from conducting comprehensive and extensive experimental evaluations or providing...


Identifying Key Performance Indicators of the Hospital Information System for Benchmarking: A Qualitative Study

Background: To have an optimal hospital information system, the performance indicators used for evaluation must be identified, since defining proper performance indicators is one of the important principles of benchmarking. This study aimed to identify key indicators for benchmarking hospital information system performance. Methods: This qualitative content analysis study was conducted in 2016-2017...


Measuring Performance, Estimating Most Productive Scale Size, and Benchmarking of Hospitals Using DEA Approach: A Case Study in Iran

Background and Objectives: The goal of the current study is to evaluate the performance of hospitals and their departments. This manuscript aims at estimating the most productive scale size (MPSS) and returns to scale (RTS), and at benchmarking inefficient hospitals and their departments. Methods: The radial and non-radial data envelopment analysis (DEA) ap...
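The truncated methods sentence refers to radial and non-radial DEA models. For background (a standard textbook formulation, not necessarily the exact variant used in this study), the input-oriented radial CCR model evaluates a unit o among n units with inputs x_{ij} (i = 1..m) and outputs y_{rj} (r = 1..s) via the linear program:

    \begin{aligned}
    \min_{\theta,\,\lambda}\quad & \theta \\
    \text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io}, \quad i = 1,\dots,m,\\
    & \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro}, \quad r = 1,\dots,s,\\
    & \lambda_j \ge 0, \quad j = 1,\dots,n.
    \end{aligned}

Adding the convexity constraint \sum_{j} \lambda_j = 1 yields the variable-returns-to-scale (BCC) model; comparing a unit's CCR and BCC efficiency scores is one common way to determine its returns to scale, which is the basis for estimating MPSS.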


Towards the evaluation of the LarKC Reasoner Plug-ins

In this paper, we present an initial framework for the evaluation and benchmarking of reasoners deployed within the LarKC platform, a platform for massive distributed incomplete reasoning that will remove the scalability barriers of currently existing reasoning systems for the Semantic Web. We discuss the evaluation methods, measures, benchmarks, and performance targets for the plug-ins to be develo...


Using Semantically Annotated Models for Supporting Business Process Benchmarking

In this paper we describe an approach for using semantic annotations of process models to support business process benchmarking. We show how semantic annotations can support the preparation of process benchmarking data by adding machine-processable semantic information to existing process models without modifying the original modeling language, conduct semantic analyses for the purpose of perfo...

